41 research outputs found
A Taxonomy and Survey of Attacks Against Machine Learning
The majority of machine learning methodologies operate under the assumption that their environment is benign. However, this assumption does not always hold: it is often advantageous for adversaries to maliciously modify the training data (poisoning attacks) or the test data (evasion attacks). Such attacks can be catastrophic given the growth and penetration of machine learning applications in society. There is therefore a need to secure machine learning, enabling its safe adoption in adversarial settings such as spam filtering, malware detection, and biometric recognition. This paper presents a taxonomy and survey of attacks against systems that use machine learning. It organizes the body of knowledge in adversarial machine learning so as to identify the aspects to which researchers from different fields can contribute. The taxonomy identifies attacks that share key characteristics and can therefore potentially be addressed by the same defense approaches. The proposed taxonomy thus makes it easier to understand the existing attack landscape with a view to developing defense mechanisms, which are not investigated in this survey. The taxonomy is also leveraged to identify open problems that can lead to new research areas within the field of adversarial machine learning.
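The poisoning/evasion distinction above can be illustrated with a minimal evasion example (not taken from the survey): for a toy linear classifier, an FGSM-style perturbation within a small L-infinity budget is enough to flip a test-time prediction. All weights and inputs below are invented for illustration.

```python
import numpy as np

# Toy linear classifier: predicts sign(w . x + b)
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(np.sign(w @ x + b))

x = np.array([2.0, 0.5, 1.0])
print(predict(x))        # the clean input is classified +1

# Evasion attack: for a linear model the gradient of the score with
# respect to x is just w, so stepping in direction -sign(w) (an
# FGSM-style step) lowers the score fastest under an L-infinity
# budget eps.
eps = 1.5
x_adv = x - eps * np.sign(w)
print(predict(x_adv))    # the perturbed input flips to -1
```

A poisoning attack would instead tamper with the data used to fit `w` and `b` before deployment; the taxonomy groups such attacks by where in the pipeline the adversary intervenes.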
ZEKRO: Zero-Knowledge Proof of Integrity Conformance
In the race towards next-generation systems of systems, the adoption of edge and cloud computing is escalating to deliver the underpinning end-to-end services. To safeguard against the increasing attack landscape, remote attestation lets a verifier reason about the state of an untrusted remote prover. However, for most schemes, verifiability is only established under the omniscient and trusted verifier assumption, where a verifier knows the prover's trusted states and the prover must reveal evidence about its current state. This assumption severely challenges upscaling, inherently limits eligible verifiers, and naturally prohibits adoption in public-facing security-critical networks. To meet current zero trust paradigms, we propose a general ZEro-Knowledge pRoof of cOnformance (ZEKRO) scheme, which considers mutually distrusting participants and enables a prover to convince an untrusted verifier about the correctness of its state in zero knowledge, by ensuring that the prover cannot cheat.
RETRACT: Expressive Designated Verifier Anonymous Credentials
Anonymous credentials (ACs) are secure digital versions of credentials that allow selective proof of possession of encoded attributes without revealing additional information. Attributes can include basic personal details (e.g., passport, medical records) as well as claims about existing attributes (e.g., age > 18), which can be proven without disclosing any concrete information. However, embedding all possible claims in a credential is impractical. To address this, we propose that verifiers define policies as high-level programs executed by holders on their credentials. We also propose making the proofs designated-verifier to prevent the misuse or leakage of sensitive information by dishonest verifiers to any unwanted third party.
Android Privacy C(R)ache: Reading your External Storage and Sensors for Fun and Profit
Android's permission system empowers informed privacy decisions when installing third-party applications. However, examining the access permissions is not enough to assess privacy exposure; even seemingly harmless applications can severely expose user data. This is what we demonstrate here: an application with the common READ_EXTERNAL_STORAGE and INTERNET permissions can be the basis of extracting and inferring a wealth of private information. What has been overlooked is that such a "curious" application can prey on data stored in Android's commonly accessible external storage or on unprotected phone sensors. By accessing and stealthily extracting data thought to be unworthy of protection, we manage to access highly sensitive information: user identifiers and habits. Leveraging data-mining techniques, we explore a set of popular applications, establishing that there is a clear privacy danger for the numerous users installing innocent-looking but, possibly, "curious" applications.
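As an illustration of the kind of habit inference the abstract describes, the following hypothetical sketch (timestamps and logic invented, not from the paper) shows how file modification times scraped from shared external storage could reveal a user's daily routine.

```python
from collections import Counter
from datetime import datetime

# Hypothetical modification timestamps gathered from files in shared
# external storage (e.g., photos or app caches); purely illustrative.
timestamps = [
    "2016-01-10 07:55", "2016-01-11 08:05", "2016-01-12 07:48",
    "2016-01-10 22:30", "2016-01-11 22:41", "2016-01-13 23:02",
]

# Count file activity per hour of day.
hours = Counter(datetime.strptime(t, "%Y-%m-%d %H:%M").hour
                for t in timestamps)

# The busiest hours hint at the user's routine
# (e.g., morning commute, late evening at home).
busiest = sorted(h for h, _ in hours.most_common(2))
print(busiest)
```

Even such coarse signals, aggregated over many files and sensors, are what make the "unworthy of protection" data in external storage privacy-sensitive.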